Friday, December 20, 2024

Artificial Intelligence: A Visit to the Catastrophic Problems Café

One thing almost everyone creating, using, coding, regulating, or just plain writing or thinking about AI feels duty-bound to do is to consider the chance that the technology will destroy us.  I haven’t done that in print yet, so here I go.

In his broad-based On the Edge:  The Art of Risking Everything (Penguin Press, 2024), author Nate Silver devoted a 56-page chapter, “Termination,” to the chance that AI will obliterate humanity or nearly so.  He said there was a wide range of what he called “p(doom)” opinions, or estimates of the chances of such an outcome.  He considered more precise definitions of doom – for example, does it mean that “every single member of the human species and all biological life on Earth dies,” or could it be only “the destruction of humanity’s long-term potential,” or even “something where humans are kept in check” with “the people making the big calls” being “a coalition of AI systems”?  With doom defined as “all but five thousand humans ceasing to exist by 2100,” the average Silver found from “domain experts” on AI was 8.8%, against 0.7% from “generalists who had historically been accurate when making other probabilistic predictions.”  The highest expert p(doom) he named was “20 to 30 percent,” but there are certainly larger ones out there.

How would the technology do its dirty work?  One way was described in “A.I. May Save Us, or May Construct Viruses to Kill Us” (The New York Times, July 27th).  Author Nicholas Kristof said that “for less than $100,000, it may now be possible to use artificial intelligence to develop a virus that could kill millions of people.”  The possibilities range from bugs that murder indiscriminately to something that “might be possible”:  using DNA knowledge to create a pathogen tailored to “kill or incapacitate” one specific person.  Kristof is a journalist, not a technician, but since so much thinking about AI is still conceptual, his concerns are valid.

Another New York Times columnist soon thereafter came out with “Many People Fear A.I.  They Shouldn’t” (David Brooks, July 31st).  His view was that “many fears about A.I. are based on an underestimation of the human mind” – he cited “scholar” Michael Ignatieff as saying that “what we do” was not algorithmic, but “a distinctively, incorrigibly human activity that is a complex combination of conscious and unconscious, rational and intuitive, logical and emotional reflection.”  He also wrote that while engineers claim to be “building machines that think like people,” per neuroscientists “that would be a neat trick, because we don’t know how people think.”

The next month, Greg Norman looked at the problem Kristof raised, in “Experts warn AI could generate ‘major epidemics or even pandemics’ – but how soon?” (Fox News, August 28th).  Per “a paper published in the journal Science by co-authors from Johns Hopkins University, Stanford University and Fordham University,” the worry stems from AI models being exposed to “substantial quantities of biological data,” which can do anything “from speeding up drug and vaccine design to improving crop yields.”  Although today’s AI models likely do not “substantially contribute” to biological risks, the “essential ingredients to create highly concerning advanced biological models may already exist or soon will,” which could cause problems.

All of this depends, though, on what AI is allowed to access.  It is and will be able to formulate detailed deadly plans, but what happens from there?  In 1976, a Princeton undergraduate, John A. Phillips, wrote and submitted a term paper giving detailed plans for assembling an atomic bomb, with all information from readily available public sources.  Although one expert said the device would have had about an even chance of detonating, it was never built.  That, for me, is why my p(doom) is very low, less than a tenth of one percent:  there is no indication that AI models can build things by themselves in the physical world.

So far, we are doing well at containing AI.  As for the future, Silver said that, if given a chance to “permanently and irrevocably” stop its progress, he would not, as, ultimately, “civilization needs to learn to live with the technology we’ve built, even if that means committing ourselves to a better set of values and institutions.”  We can deal with artificial intelligence – a vastly more difficult challenge we face is dealing with ourselves.  That’s the last word.  With that, it’s time to leave the café.

Friday, December 13, 2024

Electric Vehicles: Sparse Commentary, But Still Worthwhile

Since August, I’ve been trying to find enough articles to justify an EV post, but not much is coming out.  We have news about Norway, with its shorter driving distances and politically liberal, readily government-obeying people, moving toward banning or restricting, maybe prohibitively, internal combustion vehicles, but those conditions don’t apply in the United States, so such policies can’t reasonably be seen as role models for us.  In the meantime, what’s the best of the slim pickings of second-half 2024’s published pieces?

The oldest was Jack Ewing’s “Electric Cars Help the Climate.  But Are They Good Value?” (The New York Times, July 29th).  The author addressed “factors to consider, many of which depend on your driving habits and how important it is to you to reduce your impact on the environment.”  He wrote that “it’s hard to say definitively how long batteries will remain usable,” and although they “do lose range over time,” “the degradation is very slow.”  As for resale value, “on average, electric vehicles depreciated by 49 percent over five years, compared to 39 percent for all” vehicles, hurt by “steep price cuts” on new ones.  Fueling and maintenance, though, can both be cheaper:  per the Environmental Protection Agency, one electric pickup truck model costs less than half as much to fuel as its gasoline or diesel version.  Such vehicles also do not need oil changes, spark plugs, or muffler replacements, and overall “have fewer moving parts to break down,” although their heavy batteries mean they need new tires more often.  Ewing, though, did not mention driving range, certainly a concern varying greatly between consumers.
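To see how Ewing’s factors can net out, here is a minimal back-of-the-envelope sketch in Python.  Only the 49 and 39 percent depreciation figures and the less-than-half fueling cost come from the article; the $50,000 price and the annual dollar amounts are my own illustrative assumptions.

```python
# Back-of-envelope five-year ownership cost:  depreciation plus fuel and
# maintenance.  The 49%/39% depreciation shares and the less-than-half
# fueling cost are from the Ewing article; every dollar amount below is
# an illustrative assumption, not data.

def five_year_cost(price, depreciation_share, annual_fuel, annual_maintenance):
    """Depreciation over five years plus five years of running costs."""
    return price * depreciation_share + 5 * (annual_fuel + annual_maintenance)

# Hypothetical $50,000 pickup in electric and diesel versions.
ev = five_year_cost(50_000, 0.49, annual_fuel=900, annual_maintenance=400)
gas = five_year_cost(50_000, 0.39, annual_fuel=2_000, annual_maintenance=800)

print(f"EV:  ${ev:,.0f}   Gas: ${gas:,.0f}")  # EV:  $31,000   Gas: $33,500
```

On these made-up numbers, the electric pickup’s steeper depreciation is more than repaid by cheaper fueling and maintenance, which is exactly why the answer depends on your driving habits.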

I have advocated hybrid vehicles as the best of both worlds, so was surprised and disappointed to see “Why the hype for hybrid cars will not last” (The Economist, September 17th).  The piece does not consider hybrids needing no external electric charging, dealing only with plug-in hybrid electric vehicles (PHEVs).  As of press time, carmakers had been “cooling on” non-hybrid electrics and “warming to hybrids,” which are especially profitable, with buyers thinking of them as “cheap,” as they need much smaller batteries.  The uncredited author expected that they will become less common as California and the entire European Union move to ban their sale in the next decade.

Another advantage of electric cars we may not have anticipated is that “It Turns Out Charging Stations Are Cash Cows For Nearby Businesses” (Rob Stumpf, InsideEVs.com, September 24th).  I passed along years ago the idea that if driverless vehicles took over, smoking would drop, as so many people buy cigarettes at gas stations – this is sort of the other side.  “EV charging stations aren’t just better for the environment; they’re also money printers.  And it’s not just charging network providers who see green – so do nearby shops.”  The facilities seem to be benefiting businesses as far as a mile away, with “coffee shops” and other “places where people can kill 20 or so minutes” doing especially well because of this “dwell time.”  Watch this trend – it will become more and more focused, unless, somehow, charging takes no longer than a fill-up does today.  And perhaps coffee consumption will go up.

Last, we have “6 Common EV Myths and How to Debunk Them” (Jonathan N. Gitlin, Wired.com, November 16th).  How true are these areas of concern, actually seven by my count?  Gitlin wrote that “charging an EV takes too long” is invalid, since those unhappy with 18-to-20-minute times can be “curmudgeons,” and people can recharge at home while the car is idle.  However, “I can’t charge it at home” is reasonable, as “if you cannot reliably charge your car at home or at work – and I mean reliably (in bold) – you don’t really have any business buying a plug-in vehicle yet.”  “An EV is too expensive” fails since “75 percent of American car buyers buy used cars,” and “used EVs can be a real bargain.”  Weather concerns are no worse than with other vehicles, but he admitted that “I need 600 miles of uninterrupted range” doesn’t have “a good rebuttal,” though at least one electric model is now good for almost 500.  “They’re bad for the environment” does not apply to carbon dioxide emissions or in localities with little electricity coming from coal.  As for “We don’t have enough electricity” – well, yes, we do, for everything except artificial intelligence model creation.

Overall, very little has been decided about the future of electric and hybrid vehicles in America.  But, with time, there will be more.  Stay tuned.

Friday, December 6, 2024

More Jobs but a Down Report – AJSN Showed Latent Demand Off a Bit To 15.8 Million

This morning’s Bureau of Labor Statistics Employment Situation Summary was touted as important, with last time’s tiny nonfarm payroll gain expected to rebound strongly and the results affecting the Federal Reserve’s interest rate decision less than two weeks away. 

On the number of jobs, the report delivered.  The five estimates I saw were all between 200,000 and 215,000, and the result was 227,000.  Otherwise, outcomes were largely a sea of red.  Seasonally adjusted and unadjusted unemployment each gained 0.1% to get to 4.2% and 4.0%.  The total adjusted number of jobless rose 100,000 to 7.1 million, with the unadjusted variety up 75,000 to 6,708,000.  The unadjusted count of employed was off 482,000 to 161.456 million.  There were 100,000 more long-term unemployed – 1.7 million out of work for 27 weeks or longer.  The two statistics best showing how common it is for Americans to be working or one step away, the employment-population ratio and the labor force participation rate, worsened 0.2% and 0.1% to get to 59.8% and 62.5%.  The only two of the front-line results I track to improve were the count of people working part-time for economic reasons, or keeping such arrangements while looking thus far unsuccessfully for full-time ones, which lost 100,000 to 4.5 million, and average hourly private nonfarm payroll earnings, up 15 cents, well over inflation, to $35.61.

The American Job Shortage Number, the measure showing how many additional positions could be quickly filled if all knew they would be easy to get, dropped just over 60,000 to reach the following:


The largest change since October was the 215,000 effect of the 269,000 fall in the number of those wanting work but not having searched for it for a year or more.  The contribution of those officially unemployed was up 69,000, and no other statistic above affected the AJSN by more than 39,000.  The share from joblessness was 38.1%, rising 0.5%.
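For readers new to the metric, the AJSN weights each category of latent demand and sums the results.  Below is a minimal sketch of that idea in Python; the two weights shown are assumptions reverse-engineered from this post’s figures (215,000 out of 269,000 suggests about 0.8 for year-plus non-searchers, and the unemployment contribution tracks roughly 90% of the raw count), not the published methodology.

```python
# Sketch of how a weighted latent-demand metric like the AJSN responds to
# changes in its input categories.  Both weights are assumptions inferred
# from this post's numbers, not the official methodology.

WEIGHTS = {
    "officially_unemployed": 0.9,    # assumed:  +75,000 raw -> ~+69,000 effect
    "not_searched_past_year": 0.8,   # assumed:  -269,000 raw -> ~-215,000 effect
}

def ajsn_effect(category, raw_change):
    """AJSN effect of a raw change in one category's headcount."""
    return WEIGHTS[category] * raw_change

print(ajsn_effect("not_searched_past_year", -269_000))  # -215,200.0
print(ajsn_effect("officially_unemployed", 75_000))     # 67,500.0
```

The full AJSN sums such weighted counts across all of the table’s rows, plus a share of Americans living abroad.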

Compared with a year before, the AJSN gained 180,000, with most of the activity from the mostly offsetting effects of higher unemployment and fewer expatriates. 

How can we summarize this report?  Torpid.  Despite the 227,000 net new jobs, fewer people are working, and it looks like many of those who had not tried to get jobs for a year or more have now declared themselves uninterested.  We know how flexible that category really is, and this time it flexed up, with much more than the aging population responsible.  We really did not go anywhere this time, so the Fed clearly should grab the final 2024 interest rate cut.  As for the turtle, he did not move.

Friday, November 29, 2024

Artificial Intelligence – Three-Plus Months of Problems and Perceptions, With Hope

As I predicted at the end of last year, AI has found a home in many niches.  But does it seem capable of justifying the $1 trillion economy built around it?

Per “Artificial intelligence is losing hype” on August 19th, The Economist had concerns – or did it?  The editorial piece’s subtitle, “For some, that is proof that the tech will in time succeed.  Are they right?,” leaves open the possibility that a calming of AI expectations, especially those not backed by reasonable data and judicious extrapolation, may even predict the technology’s broader triumph.  It told us that “according to the latest data from the Census Bureau, only 5.1% of American companies use AI to produce goods and services, down from a high of 5.4% early this year.”  The article compared AI investments with 19th-century British “railway fever,” which caused an investment bubble but was eventually justified, as firms, “using the capital they had raised during the mania, built the track out, connecting Britain from top to bottom and transforming the economy.”  Could that happen with AI?

On September 21st, the same publication, in “The breakthrough AI needs,” considered what might be required for AI to be comprehensively and gigantically successful, and came up with using more “creativity” to end “resource constraints,” with gains to come “from giving ideas and talent the space to flourish at home, not trying to shut down rivals abroad,” so that “the AI universe could contain a constellation of models, instead of just a few superstars.”  It is indeed clear that picking the most successful AI companies of 2050 now could be no more accurate than using 1900 information to pick the premier automakers of the industry’s mid-1920s boom.

Most pessimistic of all was “Will A.I. Be a Bust?  A Wall Street Skeptic Rings the Alarm” (Tripp Mickle, The New York Times, September 23rd).  The doubter, Goldman Sachs stock research head Jim Covello, had written three months earlier that generative artificial intelligence, which can summarize text and write software code, “makes so many mistakes that it was questionable whether it would ever reliably solve complex problems.”  The “co-head of the firm’s geopolitical advisory business… urged him to be patient,” resulting in “private bull-and-bear debates” between the two, but the issue, within as well as outside Goldman Sachs, remained partisan and unresolved.

Back to The Economist, where on November 9th appeared “A nasty case of pilotitis,” subtitled “companies are struggling to scale up generative AI.”  Although, per the piece, “fully 39% of Americans now say they use” AI, the share of companies doing so remained near 5%, many of which appeared “to be suffering from an acute form of pilotitis, dilly-dallying with pilot projects without fully implementing the technology.”  Managements seemed to fear embarrassment if they moved too quickly and damaged their firms’ reputations, and have also been held back by cost, “messy data” needing consolidation, and an AI-skill shortage.  Deloitte research found that the share of senior executives with a “high” or “very high” level of interest in generative AI had fallen to 63%, down from 74% in the first quarter of the year, suggesting that the “new-technology shine” may be wearing off, and one CIO’s boss told him to stop promising 20% productivity improvements unless he was first prepared to cut his own department’s headcount by a fifth.

Another AI issue, technical instead of organizational, was described in “Big leaps to baby steps” in the November 12th Insider Today, which started with “OpenAI’s next artificial intelligence model, Orion, reportedly isn’t showing the massive leap in improvement previous versions have enjoyed.”  Company testers said Orion’s improvement was “only moderate and smaller than what users saw going from GPT-3 to GPT-4.”  With high costs and power and data limitations still looming, shrinking gains from scaling could serve to eliminate future releases. 

Six days later, the same source described “A Copilot conundrum”:  even a year after its release, Microsoft’s so-named “flagship AI product” has been “coming up short on the big expectations laid out for it.”  An executive there told a Business Insider AI expert “that Copilot offers useful results about 10% of the time.”  Yet the software does have its adherents, including Lumen Technologies’ management, which forecasts “$50 million in annual savings from its sales team’s use of Copilot.”

An overall problem stemming from the above is that “Businesses still aren’t fully ready for AI, surveys show” (Patrick Kulp, Tech Brew, November 22nd).  “Indices attempting to gauge how companies have fared at reworking operations around generative AI have been piling up lately – and the verdict is mixed.”  While AI’s shortcomings are real and documentable, many firms “are still organizing their IT infrastructure.”  Reasons mentioned here were “culture and data challenges, as well as a lack of necessary talent and skills,” causing “nearly half of companies” to report that AI has fallen short of expectations across top priorities.  So, if AI is now falling short overall, more than its producers are to blame.

A final AI course proposal came from Kai-Fu Lee on Wired.com on November 26th:  “How Do You Get to Artificial General Intelligence:  Think Lighter.”  The idea here was to build “models and apps” that are “purpose-built for commercial applications using leaner models and innovative architecture,” thereby costing “a fraction to train and achieve levels of performance good enough for consumers and enterprises,” instead of making massive, comprehensive large language models that end up costing vastly more per query to use.  It may even be that different apps can use different AI sources, somehow combined, as the sketch below suggests.  That would be more difficult to organize, but the stakes are high.
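As a purely hypothetical illustration of that “think lighter” idea, here is a minimal Python sketch.  The model names and per-query costs are invented for the example; nothing here comes from Lee’s piece beyond the concept of sending each task to a small, purpose-built model rather than one giant LLM.

```python
# Hypothetical illustration of routing each task to a lean, purpose-built
# model, falling back to a large general model only when necessary.
# All model names and per-query costs below are invented for the example.

LEAN_MODELS = {
    "summarize": ("mini-summarizer", 0.0002),  # (model name, assumed $ per query)
    "code":      ("mini-coder",      0.0004),
    "chat":      ("mini-chat",       0.0001),
}
BIG_MODEL = ("giant-llm", 0.0100)  # assumed cost of the do-everything model

def route(task):
    """Pick a lean specialist if one exists, else fall back to the big model."""
    return LEAN_MODELS.get(task, BIG_MODEL)

model, cost = route("summarize")
print(model, cost)  # mini-summarizer 0.0002 -- a fraction of giant-llm's 0.01
```

Combining different AI sources behind one front end this way is harder to organize than buying a single model, which is Lee’s trade-off in miniature.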

This final article points up the thesis of the September 21st piece above – AI will need creativity in ways less emphasized in the industry.  Companies will need to think outside the boxes they have built and maintained.  There are real opportunities for those doing that best to earn billions or more.  Then, and only then, may artificial intelligence reach its potential.  Designers and executives stopped from exiting through the sides by the massive issues above will need to find ways of escaping through the top or bottom – or through another dimension.  Can they do that?  We will see.

Friday, November 15, 2024

Seven Weeks of Artificial Intelligence Investments, Revenue, and Spending, and What They Tell Us

A massive amount of money is being spent on developing, preparing for, buying, and implementing AI.  What has it caused, and how does AI now look overall?

Before the articles below came out, there was “Is the AI bubble actually bursting?” (Patrick Kulp, Tech Brew, August 8th).  Concerns here were that “a stock market rout and big questions about spending continue to stoke worries,” that “some high-profile reports this summer questioned AI’s money-making potential relative to its enormous cost,” that “Microsoft, Alphabet, and Meta didn’t do much to soothe investors seeking temperance in AI capital expenditures,” and that we have reason to expect a “major course correction” in AI hype as revenues fail to keep pace with spending.

Since then, there have been strong and weak AI financial outcomes.  On August 23rd, Courtney Vien told us, in CFO Brew, “How Walmart’s seen ROI on gen AI.”  “During its last earnings call, the giant retailer reported 4.8% revenue growth, bolstered by 21% growth in its e-commerce function,” which “Walmart executives credited… to several factors… but one stood out:  generative AI.”  The technology had helped with “populating and cleaning up” the company’s gargantuan “product catalog,” and the new version has also “given Walmart more insight into its customers.”  AI has also been “driving its impulse sales” through improved “cross-category search.”

Another such success story was the subject of “Nvidia’s earnings beat Wall Street’s estimates as AI momentum continues” (Eric Revell, Fox Business, August 28th).  In its second-quarter earnings report, earnings per share reached $0.68 against the projected $0.64, and revenue came in at $30.04 billion against an expected $28.70 billion.  Although it has started production of a new AI-dedicated chip, the Blackwell, demand for the current Hopper version has “remained strong.”

A major consumer of Nvidia’s chips stands to buy many more, as “OpenAI Is Growing Fast and Burning Through Piles of Money” (Mike Isaac and Erin Griffith, The New York Times, September 27th).  Although that firm “has been telling investors that it is making billions from its chatbot,” “it has not been quite so clear about how much it is losing.”  While OpenAI’s monthly revenue “hit $300 million in August,” it “expects to lose roughly $5 billion this year after paying for costs related to running its services and other expenses like employee salaries and office rent.”  It spends most, though, on “the computing power it gets through a partnership with Microsoft, which is also OpenAI’s primary investor.”  Even if company projections showing a much brighter future come to pass, OpenAI’s financial present is dark.

On industry results, Matt Turner reported those from the previous week from five of the largest companies in the November 3rd “Insider Today” in Business Insider.  Overall, he said they were “beating estimates and committing billions to AI.”  Alphabet’s Google-branded “cloud business benefited from AI adoption, posting a 35% year-over-year increase in revenues.”  Amazon did the same, with AI-assisted cloud revenues growing 19%.  Apple’s loss of Chinese revenue “left investors underwhelmed,” and it is uncertain whether “new Apple Intelligence features help juice sales.”  “Meta beat estimates, though user growth came in below expectations,” and CEO Mark Zuckerberg “promised to keep spending on AI.”  Microsoft also did better than expected, but “concerns around capacity constraints in AI” hurt investor reactions.  Overall, AI seemed to be producing real money for these firms, but related revenue growth has hardly been explosive.

A useful summary, “How companies are spending on AI right now,” by Patrick Kulp, came out on November 12th in Tech Brew.  In effect a response to the first article above, also written by Kulp, the piece started with “Despite some worry about a possible AI bubble earlier in the year, businesses are continuing to spend on generative technology – and investors are still eyeing it as a growth area.”  Another conclusion was that AI is “becoming an office staple,” with 38% third-quarter-on-second-quarter growth in “business spending on AI vendors.”  Although “half of the top 10 fastest growing enterprise software vendors on the platform were AI startups,” “OpenAI’s ChatGPT still reigns supreme,” but companies buying it have been increasingly likely to get other firms’ products as well.  Additionally, we have “AI still fueling VC growth,” as “three-quarters of limited partners surveyed… said they plan to increase AI investments in the next 12 months, with cybersecurity, predictive analytics, and data centers garnering the most interest.”  Note that “autonomous vehicles and computer vision ranked last for sub-fields of AI catching investor attention.”  Yet, per an Accenture report, there has been a “productivity flatline” over the past year, despite more AI use.

What does all this reveal about artificial intelligence?  It is not vaporware.  Demand for it is real, in fact huge.  For some applications it is strongly objectively beneficial.  But it still has problems, with, along with many more mentioned in previous posts, profitability and productivity.  We don’t know how comprehensive its advantages will turn out to be.  But it is real, and it is progressing.  From there, we will just need to stay tuned.

Friday, November 8, 2024

Artificial Intelligence Regulation – Disjointed, and Too Soon

Over the past three months, there have been several reports on how, or even whether, AI should be legally constrained.  What did they say?

On the issue of its largest supplier, there was “As Regulators Close In, Nvidia Scrambles for a Response” (Tripp Mickle and David McCabe, The New York Times, August 6th).  It’s not surprising that this company, which not only is doing a gigantic amount of business but “by the end of last year… had more than a 90 percent share of (AI-building) chips sold around the world,” had drawn “government scrutiny.”  It has come from China, the United Kingdom, and the European Union as well as the United States Justice Department, causing Nvidia to start “developing a strategy to respond to government interest.”  Although, per a tech research firm CEO, “there’s no evidence they’re doing anything monopolistic or anticompetitive,” “the conditions are right because of their market leadership,” and “in the wake of complaints about Nvidia’s chokehold on the market, Washington’s concerns have shifted from China to competition, with everyone from start-up founders to Elon Musk grumbling about the company’s influence.”  It will not be easy for either the company or the governments.

Meanwhile, “A California Bill to Regulate A.I. Causes Alarm in Silicon Valley” (Cade Metz and Cecilia Kang, The New York Times, August 14th).  The legislation, which “could impose restrictions on artificial intelligence,” was then “still winding its way through the state capital,” and “would require companies to test the safety of powerful A.I. technologies before releasing them to the public.”  It could also, per its opposition, “choke the progress of technologies that promise to increase worker productivity, improve health care and fight climate change” and are in their infancies, pointing toward real uncertainty in how they will affect people.  Per the state’s legislative information site, it was vetoed by Governor Gavin Newsom, who said “by focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology.  Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”  Expect a different but related bill in California soon.

A thoughtful overview, “Risks and regulations,” came out in the August 24th Economist.  It stated that “artificial intelligence needs regulation.  But what kind, and how much?” and came up with various ideas.  It started with the point that AI’s “best-known risk is embodied by the killer robots in the “Terminator” films – the idea that AI will turn against its human creators,” the kind of risk some people think is “largely speculative” and others think is less important than “real risks posed by AI that exist today, such as bias, discrimination, AI-generated disinformation and violation of intellectual-property rights.”  With Chinese authorities most wanting to “control the flow of information,” and the European Union’s now-the-law AI Act being “mostly a product-safety document which regulates applications of the technology according to how risky they are,” “different governments take different approaches to regulating AI.”  Combined with most American legislation coming from states, international and even national accord seem a long way off.

What can we gain from “Rethinking ‘Checks and Balances’ for the A.I. Age” (Steve Lohr, The New York Times, September 24th)?  Recalling the Federalist Papers, a Stanford University project, now with 12 essays known as the Digitalist Papers, “contends that today is a broadly similar historical moment of economic and political upheaval that calls for a rethinking of society’s institutional arrangements.”  The writings’ “overarching concern” is that “a powerful new technology… explodes onto the scene and threatens to transform, for better or worse, all legacy social institutions,” and that therefore “citizens need to be more involved in determining how to regulate and incorporate A.I. into their lives.”  This effort seems designed to be a starting point, as, for now, we have no more idea how AI, if it meets its high-importance expectations, will affect society than we did about cars in 1900.

Overall, per Garrison Lovely in the September 29th New York Times, it may be that “Laws Need to Catch Up to Artificial Intelligence’s Unique Risks.”  Or not.  Over the past year, OpenAI has been embroiled in controversy about its safety practices, and, per Lovely, federal “protections are essential for an industry that works so closely with such exceptionally risky technology.”  As before, we do not have enough agreement among governments to do that now, but the day will come.  Sooner?  Later?  We do not know, but someday, we hope, we can get together on this potentially critical issue.

Friday, November 1, 2024

Today’s Jobs Report Didn’t Go Much of Anywhere – AJSN Latent Demand Down To 15.9 Million on Lower Number of Expatriates

This morning’s Bureau of Labor Statistics Employment Situation Summary was supposed to show a greatly reduced number of net new nonfarm payroll positions, but at 12,000 it didn’t even approach the 110,000 and 115,000 published estimates.  How did the other figures turn out?

Seasonally adjusted and unadjusted unemployment stayed the same, at 4.1% and 3.9% respectively, with the adjusted count of jobless up 200,000 to 7 million.  Of those, 1.6 million were long-term unemployed, or without work for 27 weeks or longer, down 100,000.  Those working part-time for economic reasons, or holding short-hours positions while seeking full-time ones, remained at 4.6 million.  The two measures of how common it is for Americans to be working or officially unemployed, the labor force participation rate and the employment-population ratio, both worsened, coming in at 62.6% and 60.0% for drops of 0.1% and 0.2%.  The unadjusted number of employed was off 108,000 to 161,938,000.  Better, though, were private nonfarm payroll wages, which gained 10 cents, more than inflation, to reach $35.46 per hour. 

The American Job Shortage Number or AJSN, the metric showing how many additional positions could be quickly filled if all knew they would be easy to get, fell 736,000, almost all from a much-reduced estimate of the number of Americans living outside the United States, as follows:


The share of the AJSN from official unemployment rose 2.3% to 37.6%.  Compared with a year before, the loss of 900,000 from the expatriates’ contribution was mostly offset by 480,000 more from unemployment and 154,000 from those not looking for work for the previous year, with other changes small, for a 247,000 fall. 

What happened this time?  To judge that, we look next at the measures telling us how many people left or entered the workforce.  Those were a 469,000 rise in the count of those claiming no interest in a job, and 219,000 more overall not in the labor force.  There was also consistent shrinkage in the categories of marginal attachment, the 3rd through 6th and 8th rows above.  Those departing workers were why our unemployment rates didn’t worsen, given fewer new positions than our population increase could absorb.  October’s deficiency, possibly caused mostly by storms and sudden layoffs, may well greatly reverse itself next time, but it is in the books.  Accordingly, I saw the turtle take a small step backwards.